Results 1 - 3 of 3
1.
PLoS Comput Biol; 17(12): e1009681, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34874938

ABSTRACT

Systems memory consolidation involves the transfer of memories across brain regions and the transformation of memory content. For example, declarative memories that transiently depend on the hippocampal formation are transformed into long-term memory traces in neocortical networks, and procedural memories are transformed within cortico-striatal networks. These consolidation processes are thought to rely on replay and repetition of recently acquired memories, but the cellular and network mechanisms that mediate the changes of memories are poorly understood. Here, we suggest that systems memory consolidation could arise from Hebbian plasticity in networks with parallel synaptic pathways, two ubiquitous features of neural circuits in the brain. We explore this hypothesis in the context of hippocampus-dependent memories. Using computational models and mathematical analyses, we illustrate how memories are transferred across circuits and discuss why their representations could change. The analyses suggest that Hebbian plasticity mediates consolidation by transferring a linear approximation of a previously acquired memory into a parallel pathway. Our modelling results are also in quantitative agreement with lesion studies in rodents. Moreover, a hierarchical iteration of the mechanism yields power-law forgetting, as observed in psychophysical studies in humans. The predicted circuit mechanism thus bridges spatial scales from single cells to cortical areas and time scales from milliseconds to years.
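The core mechanism described above, Hebbian plasticity copying a linear approximation of a memory into a parallel pathway during replay, can be caricatured in a toy linear model. This is a minimal sketch with invented dimensions and learning rate, not the paper's circuit model: a "fast" pathway drives the output during replay, and a leaky Hebbian rule lets a parallel "slow" pathway absorb the same input-output mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                          # number of neurons (illustrative)
w_fast = rng.normal(size=d)     # "fast" pathway holding the freshly acquired memory
w_slow = np.zeros(d)            # parallel "slow" pathway, initially naive
eta = 0.005                     # Hebbian learning rate (illustrative)

# Replay: random activity patterns are pushed through the fast pathway,
# and the slow pathway correlates its input with the resulting output
# (leaky Hebbian rule, so the weights converge instead of growing unboundedly).
for _ in range(10_000):
    x = rng.normal(size=d)      # replayed activity pattern
    y = w_fast @ x              # output driven by the fast pathway
    w_slow += eta * (y * x - w_slow)

# The slow pathway has absorbed a (noisy) linear approximation of the memory.
rel_err = np.linalg.norm(w_slow - w_fast) / np.linalg.norm(w_fast)
print(f"relative error of the transferred memory: {rel_err:.2f}")
```

In this caricature, silencing the fast pathway after replay would leave the input-output mapping approximately intact in the slow pathway, which is the signature of consolidation probed by the lesion studies mentioned in the abstract.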


Subjects
Learning/physiology; Memory Consolidation/physiology; Models, Neurological; Neuronal Plasticity/physiology; CA1 Region, Hippocampal/cytology; CA1 Region, Hippocampal/physiology; Computational Biology; Humans
2.
Neural Comput; 23(11): 2770-97, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21851275

ABSTRACT

We present a model for the emergence of ordered fiber projections that may serve as a basis for invariant recognition. After invariance transformations have self-organized, so-called control units competitively activate fiber projections for different transformation parameters. The model builds on a well-known ontogenetic mechanism, the activity-based development of retinotopy, and it employs activity blobs of varying position and size to install different transformations. For schematic input patterns, we provide a detailed analysis of the case of 1D input and output fields, showing how the model develops specific mappings. We then present results showing that the proposed learning scheme remains stable for complex, biologically more realistic input patterns. Finally, we show that the model generalizes to 2D neuronal fields driven by simulated retinal waves.
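The activity-based development of retinotopy that this model builds on can be caricatured by a 1D Kohonen-style self-organizing map. This is a hedged sketch under simplifying assumptions, not the paper's model: localized "activity blobs" repeatedly drive a best-matching unit and its neighbours, and an ordered mapping of preferred positions emerges from random initial weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n_out = 20                            # 1D output field of units
w = rng.uniform(size=n_out)           # preferred input positions, initially random

eta, n_steps = 0.1, 4000              # learning rate and training length (illustrative)
for i in range(n_steps):
    sigma = 5.0 * 0.1 ** (i / n_steps)         # neighbourhood width, annealed 5 -> 0.5
    x = rng.uniform()                          # centre of an input "activity blob"
    winner = int(np.argmin(np.abs(w - x)))     # best-matching output unit
    # Blob-like neighbourhood update: the winner and its neighbours move toward x
    h = np.exp(-0.5 * ((np.arange(n_out) - winner) / sigma) ** 2)
    w += eta * h * (x - w)

# An ordered (retinotopic) mapping emerges: preferred positions vary
# monotonically along the output field.
print(np.round(w, 2))
```

The annealed neighbourhood width plays the role of activity blobs of varying size: wide blobs first unfold the map globally, narrow ones then refine the local order.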


Subjects
Models, Neurological; Nerve Fibers/physiology; Visual Pathways/physiology; Visual Perception/physiology; Animals; Humans; Recognition, Psychology/physiology
3.
Biol Cybern; 101(5-6): 401-10, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19888596

ABSTRACT

We study the problem of learning with incomplete information in a student-teacher setup for the committee machine. The learning algorithm combines unsupervised Hebbian learning of a series of associations with a delayed reinforcement step, in which the set of previously learnt associations is partly and indiscriminately unlearnt, to an extent that depends on the success rate of the student on these previously learnt associations. The relevant learning parameter lambda represents the strength of Hebbian learning. A coarse-grained analysis of the system yields a set of differential equations for the overlaps of student and teacher weight vectors, whose solutions provide a complete description of the learning behavior. It reveals complicated dynamics, showing that perfect generalization can be obtained if the learning parameter exceeds a threshold lambda_c and if the initial value of the overlap between student and teacher weights is non-zero. In the case of convergence, the generalization error exhibits a power-law decay as a function of the number of examples used in training, with an exponent that depends on the parameter lambda. An investigation of the system flow in a subspace with broken permutation symmetry between hidden units reveals a bifurcation point lambda* above which perfect generalization does not depend on initial conditions. Finally, we demonstrate that cases of a complexity mismatch between student and teacher are optimally resolved in the sense that an over-complex student can emulate a less complex teacher rule, while an under-complex student reaches a state which realizes the minimal generalization error compatible with the complexity mismatch.
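A stripped-down version of the learning scheme can make the two ingredients concrete: Hebbian learning of a batch of teacher-generated associations, followed by partial, indiscriminate unlearning whose extent depends on the student's success rate. This sketch uses a simple perceptron student and teacher rather than a committee machine, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, lam = 100, 20, 2.0        # input dimension, batch size, Hebbian strength lambda
t = rng.normal(size=N)
t /= np.linalg.norm(t)                      # teacher weight vector
s = t + 0.5 * rng.normal(size=N)            # student with non-zero initial overlap

def gen_error(s, t):
    # Generalization error of a perceptron = (angle between weight vectors) / pi
    c = s @ t / (np.linalg.norm(s) * np.linalg.norm(t))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

errs = [gen_error(s, t)]
for _ in range(200):
    X = rng.normal(size=(P, N))              # batch of input patterns
    y = np.sign(X @ t)                       # associations provided by the teacher
    delta = (lam / N) * (y[:, None] * X).sum(axis=0)
    s = s + delta                            # Hebbian learning of the batch
    f = float(np.mean(np.sign(X @ s) == y))  # delayed evaluation: success rate
    s = s - (1.0 - f) * delta                # partial, indiscriminate unlearning
    errs.append(gen_error(s, t))

print(f"generalization error: {errs[0]:.3f} -> {errs[-1]:.3f}")
```

The net update per batch is f * delta, so the student moves toward the teacher at a rate governed by its own success, and the generalization error decays steadily under these assumed parameters.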


Subjects
Learning/physiology; Models, Neurological; Neural Networks, Computer; Algorithms; Artificial Intelligence; Computer Simulation; Humans; Mathematics; Reward